The objective of this series is to prepare you to be a professional GenAI engineer/developer. I will take you from the ground up in the realm of LLMs and GenAI, starting from the very basics and working up to production-level apps. The spirit of the series is hands-on: all examples are code-based, with final projects built step by step in Python or Google Colab and deployed with Streamlit. By the end of the series, you will have built a ChatGPT clone, a Midjourney clone, a Chat with Your Data app, a YouTube assistant app, an Ask YouTube Video app, a Study Mate app, a recommender system, an image description app with GPT-V, an image generation app with DALL-E and Stable Diffusion, a video commentator app using Whisper, and others.
We will cover prompt engineering approaches and use them to build custom apps that go beyond what ChatGPT knows. We will use the OpenAI APIs, LangChain, and many other tools. We will build different applications together using Streamlit as our UI and cloud deployment framework; Streamlit is known for its ease of use and simple Python coding. With the power of GPT models, whether from OpenAI or open source (like Llama or Mixtral on Hugging Face), we will build interesting applications: chatting with your documents, chatting with YouTube videos, building state-of-the-art recommender systems, and a video auto-commentator and translator from one voice to another. We will mix modalities: images with GPT-V and DALL-E, text with ChatGPT (GPT-3.5 and GPT-4), and voice with Whisper. We will feed the AI models custom data that is not available on the internet and that models like ChatGPT do not know about.
We will cover advanced, state-of-the-art topics like RAG and LLM agents. You will also work with different kinds of LLMs, open source or not. You will get exposed to GPT models by OpenAI, Llama models by Meta, Gemini and Bard by Google, Orca by Microsoft, Mixtral by Mistral AI, and others.
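The "chat with your data" apps above rest on retrieval: before asking the model a question, you fetch the most relevant chunks of your own documents and put them in the prompt. Here is a toy sketch of that retrieval step; it uses a bag-of-words similarity purely for illustration (a real app would call an embedding model), and all the sample texts are made up:

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": word counts. A real app would use an embedding model.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    # Rank document chunks by similarity to the query, keep the top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

# Illustrative private "company data" the base model has never seen:
chunks = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
]
top = retrieve("How many days do I have to return an item?", chunks)
prompt = f"Answer using only this context:\n{top[0]}\n\nQuestion: ..."
```

The retrieved chunk is then prepended to the user's question, which is the core idea behind the RAG apps built later in the course.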
You will use pre-trained models and also finetune them on your own data. We will learn about Hugging Face and use it for model finetuning with Parameter-Efficient Fine-Tuning (PEFT) methods, including Low-Rank Adaptation (LoRA) for efficient training. You will learn how to deploy a model in the cloud, or host it privately to address privacy concerns around your company's data. You will also learn how to use existing pre-trained models as teachers and apply model distillation to train your own custom version of a model.
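The efficiency win behind LoRA is easy to see with a little arithmetic: instead of updating a full d×k weight matrix W, you freeze W and train two small factors B (d×r) and A (r×k), so the effective weight is W + BA. A minimal numeric sketch (the 4096×4096 shape and rank 8 are illustrative choices, not taken from any specific model):

```python
def trainable_params(d, k, r):
    # Full finetuning updates every entry of the d×k matrix W;
    # LoRA trains only the factors B (d×r) and A (r×k), with W frozen.
    full = d * k
    lora = r * (d + k)
    return full, lora

# Illustrative shape: one 4096×4096 projection matrix, LoRA rank 8.
full_params, lora_params = trainable_params(4096, 4096, 8)
ratio = lora_params / full_params  # fraction of weights LoRA actually trains
```

Here LoRA trains well under 1% of the parameters of that matrix, which is why rank values like 8 or 16 make finetuning large models feasible on modest hardware.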
Python
Probability
Linear Algebra
CV
NLP
Generative AI Foundations
Transformers
GenAI and LLM foundations
OpenAI API basics
ChatGPT Clone in Streamlit
Prompt Engineering
Agents
OpenAI Assistant API
LangChain
Chat with Your Data App
Retrieval-Augmented Generation (RAG)
Vector Databases
Chat with YouTube Video
Build a Recommender System with LLMs
Midjourney Clone App with Streamlit and DALL-E
Automatic Captions generation App with GPT-V
Automatic Voice-Over App with GPT-V and Whisper
Youtube translator App with Whisper and GPT-4
Huggingface Transformers Review
Open-Source LLMs: Llama 2, Mixtral, and others
Privately Hosted LLMs: GPT4All Clone with Streamlit
Small Language Models (SLM)
LLM Finetuning with Hugging Face using QLoRA
LLM Finetuning with RLHF
GPT-3 Finetuning
LLM Finetuning with Distillation
Build good awareness of cloud systems, to be able to use and deploy your models
Be able to build apps using pre-trained models via open source LLMs or OpenAI APIs
Be able to train/finetune LLMs in different scenarios, and be aware of training methods like transfer learning, PEFT/LoRA, or RLHF
Be able to build multi-modal apps with different LLM models including Text, Speech and Image
Be able to work with RAG models to augment LLM knowledge
Build excellent knowledge of the underlying mechanics of transformers and LLMs
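The distillation topic in the curriculum boils down to one loss function: the student is trained to match the teacher's softened output distribution rather than hard labels. A toy sketch with plain Python (the logit values are invented for illustration):

```python
import math

def softmax(logits, T=1.0):
    # Temperature T > 1 softens the distribution, exposing the teacher's
    # relative preferences among wrong answers ("dark knowledge").
    exps = [math.exp(x / T) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, T=2.0):
    # Cross-entropy of the student against the teacher's softened targets.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [2.0, 1.0, 0.1]         # frozen teacher's logits for one example
good_student = [2.1, 0.9, 0.2]    # agrees with the teacher -> lower loss
bad_student = [0.1, 1.0, 2.0]     # disagrees -> higher loss
```

In real training this soft-target loss is typically mixed with the ordinary hard-label loss; minimizing it pulls the student's distribution toward the teacher's.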